Search for: All records

Creators/Authors contains: "Renaut, Rosemary A"

  1. The singular value decomposition (SVD) of a reordering of a matrix A can be used to determine an efficient Kronecker product (KP) sum approximation to A. We present the use of an approximate truncated SVD (TSVD) to find the KP approximation, contrasting a randomized singular value decomposition algorithm (RSVD), a new enlarged Golub–Kahan bidiagonalization algorithm (EGKB) and the exact TSVD. The EGKB algorithm enlarges the Krylov subspace beyond a given rank for the desired approximation, and a suitable rank is determined using an automatic stopping test. We also contrast the use of single and double precision arithmetic to find the approximate TSVDs. To illustrate the accuracy of these approximate KPs and their efficiency in terms of memory and computational cost, we consider the solution of the total variation regularized image deblurring problem using the split Bregman algorithm implemented in double precision. Together with an efficient implementation of the reordering of A, we demonstrate that the approximate KP sum can be obtained using a TSVD, and that the new EGKB algorithm compares favorably with the RSVD. These results verify that it is feasible to use single precision when estimating a KP sum from an approximate TSVD.
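    The rearrangement at the heart of this approach is the classical Van Loan–Pitsianis construction: each block of A becomes a row of a rearranged matrix R(A), and the leading singular triplets of R(A) yield the Kronecker factors. Below is a minimal NumPy sketch of that step using an exact TSVD; the function name, the block sizes (m1, n1, m2, n2) and the dense call to numpy.linalg.svd (in place of the paper's EGKB or RSVD variants and single-precision option) are illustrative assumptions.

```python
import numpy as np

def kp_sum_approx(A, m1, n1, m2, n2, r):
    """Rank-r Kronecker product sum approximation:
    A (m1*m2 x n1*n2)  ~  sum_k kron(B_k, C_k),  k = 0..r-1."""
    # Van Loan-Pitsianis rearrangement R(A): one row per m2 x n2 block
    # of A; blocks and block entries are both vectorized column-major
    # so that the reshapes below are mutually consistent.
    R = np.empty((m1 * n1, m2 * n2))
    for j in range(n1):
        for i in range(m1):
            blk = A[i * m2:(i + 1) * m2, j * n2:(j + 1) * n2]
            R[j * m1 + i] = blk.ravel(order="F")
    # Exact truncated SVD for clarity; the paper instead studies cheaper
    # approximate TSVDs (RSVD, EGKB), optionally in single precision.
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    terms = []
    for k in range(r):
        B = np.sqrt(s[k]) * U[:, k].reshape((m1, n1), order="F")
        C = np.sqrt(s[k]) * Vt[k].reshape((m2, n2), order="F")
        terms.append((B, C))
    return terms
```

    For A constructed as np.kron(B, C), a single term (r = 1) recovers B and C up to sign and scaling; the decay of the singular values of R(A) indicates how many KP terms a given accuracy requires.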
  2. Abstract The Tell Atlas of Algeria has huge potential for hydrothermal energy, with over 240 thermal springs reaching temperatures up to 98 °C in the Guelma area. The most promising region is the northeastern part, which is known to host the hottest hydrothermal systems. In this work, we use a high-resolution gravity study to identify the location and origin of the hot water, and how it reaches the surface. Gravimetric data analysis shows the shapes of the anomalies arising from structures at different subsurface depths. Calculation of the energy spectrum of the data also indicates the depths of the bodies causing the anomalies. 3D Euler deconvolution is applied to estimate the depths of preexisting tectonic structures (faults). These preprocessing steps assist with assessing the signal attenuation that impacts the Bouguer anomaly map. The residual anomaly is used in a three-dimensional inversion to provide a subsurface density distribution model that illustrates the locations of the origin of the dominant subsurface thermal systems. Overall, the combination of these standard processing steps applied to surface gravity measurements provides new insights into the sources of the hydrothermal systems in the Hammam Debagh and Hammam Ouled Ali regions. Faults that are key to the water infiltrating from depth to the surface are also identified; these represent the pathway of the hot water in the study area.
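    The energy (power) spectrum step mentioned above is standard: the slope of the radially averaged log-power spectrum of the gridded anomaly versus angular wavenumber is proportional to an ensemble source depth (roughly depth ≈ -slope/2 under that convention). A minimal sketch, assuming a uniformly spaced grid; the function name and the binning scheme are illustrative, not the authors' processing chain.

```python
import numpy as np

def radial_log_power(grid, dx, nbins=40):
    """Radially averaged log-power spectrum of a gridded anomaly.

    Fitting a line to log-power versus angular wavenumber |k| over a
    linear segment gives an ensemble source depth of about -slope / 2.
    """
    ny, nx = grid.shape
    power = np.abs(np.fft.fft2(grid - grid.mean())) ** 2
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)              # both shaped (ny, nx)
    kr = np.hypot(KX, KY).ravel()
    edges = np.linspace(0.0, kr.max(), nbins + 1)
    which = np.digitize(kr, edges)
    centers, logp = [], []
    for b in range(1, nbins + 1):             # average power per annulus
        sel = power.ravel()[which == b]
        if sel.size:
            centers.append(0.5 * (edges[b - 1] + edges[b]))
            logp.append(np.log(sel.mean()))
    return np.array(centers), np.array(logp)
```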
  3. The mixed Lp-norm, 0 ≤ p ≤ 2, stabilization algorithm is flexible for constructing a suite of subsurface models with either distinct, or a combination of, smooth, sparse, or blocky structures. This general-purpose algorithm can be used for the inversion of data from regions with different subsurface characteristics. Model interpretation is improved by simultaneous inversion of multiple data sets using a joint inversion approach. An effective and general algorithm is presented for the mixed Lp-norm joint inversion of gravity and magnetic data sets. The imposition of the structural cross-gradient enforces similarity between the reconstructed models. For efficiency, the implementation relies on three crucial details: (i) the data are assumed to be on a uniform grid, providing sensitivity matrices that decompose in block-Toeplitz-Toeplitz-block (BTTB) form for each depth layer of the model domain and yield efficiency in storage and computation via 2D fast Fourier transforms; (ii) a matrix-free implementation for calculating derivatives of parameters reduces memory and computational overhead; and (iii) an alternating updating algorithm is employed. Balancing of the data misfit terms is imposed to assure that the gravity and magnetic data sets are fit with respect to their individual noise levels without overfitting either model. Strategies to find all weighting parameters within the objective function are described. The algorithm is validated on two synthetic but complicated models. It is applied to invert gravity and magnetic data acquired over two kimberlite pipes in Botswana, producing models that are in good agreement with borehole information available in the survey area.
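    Detail (i) in the item above is what makes the method fast: a BTTB sensitivity block never needs to be formed explicitly, because embedding it in a block-circulant-circulant-block matrix diagonalizes the product via the 2D FFT. A minimal sketch of that matrix-vector product, assuming row-major ordering of the grid and the coefficient layout described in the docstring; the names and layout are illustrative.

```python
import numpy as np

def bttb_matvec(t, x, ny, nx):
    """y = A @ x for a BTTB matrix A of size (ny*nx, ny*nx).

    t : (2*ny - 1, 2*nx - 1) generating coefficients with
        A[(i, j), (k, l)] = t[ny - 1 + i - k, nx - 1 + j - l],
        e.g. one depth layer of a gridded sensitivity matrix.
    x : length ny*nx vector, row-major over the measurement grid.
    A.T @ x uses the same routine with t flipped: t[::-1, ::-1].
    """
    P, Q = 2 * ny, 2 * nx
    # Circulant embedding: offset (di, dj) is stored at (di % P, dj % Q).
    c = np.zeros((P, Q))
    di = np.arange(-(ny - 1), ny)
    dj = np.arange(-(nx - 1), nx)
    c[np.ix_(di % P, dj % Q)] = t
    xpad = np.zeros((P, Q))
    xpad[:ny, :nx] = x.reshape(ny, nx)
    # Circular convolution = pointwise product in the Fourier domain.
    y = np.fft.ifft2(np.fft.fft2(c) * np.fft.fft2(xpad)).real
    return y[:ny, :nx].ravel()
```

    Each product costs O(ny nx log(ny nx)) with O(ny nx) storage per depth layer, versus O((ny nx)^2) for a dense block, which is the storage and computation gain the abstract describes.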
  4. SUMMARY A fast algorithm for the large-scale joint inversion of gravity and magnetic data is developed. The algorithm uses a non-linear Gramian constraint to impose correlation between the density and susceptibility of the reconstructed models. The global objective function is formulated in the space of the weighted parameters, but the Gramian constraint is implemented in the original space, and the non-linear constraint is imposed using two separate Lagrange parameters, one for each model domain. Significantly, this combined approach, using the two spaces, provides more similarity between the reconstructed models. Moreover, it is shown theoretically that the gradient for the unweighted space is not a scalar multiple of that for the weighted space, and hence cannot be accounted for by adjusting the Lagrange parameters. It is assumed that the measured data are obtained on a uniform grid and that a consistent regular discretization of the volume domain is imposed. Then the sensitivity matrices exhibit a block-Toeplitz-Toeplitz-block structure for each depth layer of the model domain, and both forward and transpose operations with the matrices can be implemented efficiently using two-dimensional fast Fourier transforms. This makes it feasible to solve large-scale problems with respect to both computational cost and memory demands, and to solve the non-linear problem by applying iterative methods that rely only on matrix-vector multiplications. As such, the use of the regularized reweighted conjugate gradient algorithm, in conjunction with the structure of the sensitivity matrices, leads to a fast methodology for large-scale joint inversion of geophysical data sets. Numerical simulations demonstrate that it is possible to apply a non-linear joint inversion algorithm, with Lp-norm stabilisers, for the reconstruction of large model domains on a standard laptop computer. It is demonstrated that, while the p = 1 choice provides sparse reconstructed solutions with sharp boundaries, it is also possible to use p = 2 to provide smooth and blurred models. The methodology is used to invert gravity and magnetic data obtained over an area in the northwest of the Mesoproterozoic St Francois Terrane, southeast Missouri, USA.
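    The Gramian term itself is compact: it is the determinant of the 2 x 2 Gram matrix of the two model vectors, which is non-negative and vanishes exactly when the models are linearly dependent, so penalizing it promotes correlated density and susceptibility models. A minimal sketch of the term and its gradient in the original (unweighted) space only; the weighted-space formulation and the two Lagrange parameters from the paper are not reproduced here, and the function names are illustrative.

```python
import numpy as np

def gramian(m1, m2):
    """det of the Gram matrix [[<m1,m1>, <m1,m2>], [<m2,m1>, <m2,m2>]].

    Non-negative by Cauchy-Schwarz; zero iff m1 and m2 are linearly
    dependent, so it acts as a correlation-enforcing penalty."""
    return (m1 @ m1) * (m2 @ m2) - (m1 @ m2) ** 2

def gramian_grad(m1, m2):
    """Gradient of the Gramian with respect to m1 (swap roles for m2)."""
    return 2.0 * (m2 @ m2) * m1 - 2.0 * (m1 @ m2) * m2
```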
  5. SUMMARY We discuss the focusing inversion of potential field data for the recovery of sparse subsurface structures from surface measurement data on a uniform grid. For the uniform grid, the model sensitivity matrices have a block-Toeplitz-Toeplitz-block structure for each block of columns related to a fixed depth layer of the subsurface. Then all forward operations with the sensitivity matrix, or its transpose, are performed using the 2D fast Fourier transform. Simulations show that the implementation of the focusing inversion algorithm using the fast Fourier transform is efficient, and that the algorithm can be realized on standard desktop computers with sufficient memory for storage of volumes up to size n ≈ 10^6. The linear systems of equations arising in the focusing inversion algorithm are solved using either Golub–Kahan bidiagonalization or randomized singular value decomposition algorithms. These two algorithms are contrasted for their efficiency when used to solve large-scale problems, with respect to the sizes of the projected subspaces adopted for the solutions of the linear systems. The results confirm earlier studies that the randomized algorithms are to be preferred for the inversion of gravity data, and that for data sets of size m it is sufficient to use projected spaces of size approximately m/8. For the inversion of magnetic data sets, we show that it is more efficient to use Golub–Kahan bidiagonalization, and that it is again sufficient to use projected spaces of size approximately m/8. Simulations support the presented conclusions and are verified for the inversion of a magnetic data set obtained over the Wuskwatim Lake region in Manitoba, Canada.
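    For reference, the Golub–Kahan bidiagonalization used for these projected solves takes only a few lines once products with the sensitivity matrix and its transpose are available (for the uniform grid, the FFT-based BTTB products sketched earlier). A minimal sketch in plain NumPy; the function names are illustrative, and the reorthogonalization a practical implementation may require is omitted.

```python
import numpy as np

def gkb(matvec, rmatvec, b, k):
    """k steps of Golub-Kahan bidiagonalization started from b.

    matvec(x) = A @ x and rmatvec(y) = A.T @ y. Returns U (m x k+1),
    V (n x k) and lower-bidiagonal B (k+1 x k) with A @ V = U @ B and
    b = ||b|| * U[:, 0]; a regularized solution is then computed in the
    small projected space span(V) (size about m/8 in the paper)."""
    u = b / np.linalg.norm(b)
    U, V, alphas, betas = [u], [], [], []
    v = rmatvec(u)                      # A.T @ u starts the recurrence
    for _ in range(k):
        if V:
            v = rmatvec(u) - betas[-1] * V[-1]
        alpha = np.linalg.norm(v)
        v = v / alpha
        V.append(v)
        alphas.append(alpha)
        u = matvec(v) - alpha * U[-1]
        beta = np.linalg.norm(u)
        u = u / beta
        U.append(u)
        betas.append(beta)
    B = np.zeros((k + 1, k))
    B[np.arange(k), np.arange(k)] = alphas
    B[np.arange(1, k + 1), np.arange(k)] = betas
    return np.column_stack(U), np.column_stack(V), B
```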